This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 11-01 05:43:03] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd),
    }
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objective_name="hartmann6",
    minimize=True,
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 11-01 05:43:04] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 11-01 05:43:04] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 11-01 05:43:04] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 11-01 05:43:04] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to an external system.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data=noisy_hartmann_evaluation_function(parameters),
    )
/home/runner/work/Ax/Ax/ax/core/observation.py:274: FutureWarning:
In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.350365, 'x2': 0.534914, 'x3': 0.28581, 'x4': 0.031733, 'x5': 0.346707, 'x6': 0.265801}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.207931, 0.1), 'l2norm': (0.952312, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.764463, 'x2': 0.487075, 'x3': 0.899911, 'x4': 0.095333, 'x5': 0.809938, 'x6': 0.051847}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.004631, 0.1), 'l2norm': (1.449633, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.996708, 'x2': 0.327153, 'x3': 0.604077, 'x4': 0.002915, 'x5': 0.197169, 'x6': 0.219705}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-0.05411, 0.1), 'l2norm': (1.114938, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.119037, 'x2': 0.15247, 'x3': 0.023739, 'x4': 0.875672, 'x5': 0.282641, 'x6': 0.851716}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (0.063051, 0.1), 'l2norm': (1.272584, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.353731, 'x2': 0.402018, 'x3': 0.017664, 'x4': 0.384639, 'x5': 0.626059, 'x6': 0.573664}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.349724, 0.1), 'l2norm': (1.115278, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.635563, 'x2': 0.784552, 'x3': 0.207476, 'x4': 0.253796, 'x5': 0.698805, 'x6': 0.643259}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (0.017883, 0.1), 'l2norm': (1.371914, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.614297, 'x2': 0.825066, 'x3': 0.995075, 'x4': 0.952989, 'x5': 0.601908, 'x6': 0.35429}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (0.087708, 0.1), 'l2norm': (1.826406, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.512693, 'x2': 0.25607, 'x3': 0.487293, 'x4': 0.320386, 'x5': 0.27707, 'x6': 0.553621}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-2.214048, 0.1), 'l2norm': (1.07091, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.274458, 'x2': 0.129143, 'x3': 0.799182, 'x4': 0.390762, 'x5': 0.741255, 'x6': 0.626882}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.284659, 0.1), 'l2norm': (1.340822, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.054833, 'x2': 0.526752, 'x3': 0.941861, 'x4': 0.455977, 'x5': 0.373662, 'x6': 0.694011}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.919214, 0.1), 'l2norm': (1.335815, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.555977, 'x2': 0.342138, 'x3': 0.961406, 'x4': 0.762409, 'x5': 0.407304, 'x6': 0.269143}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.10288, 0.1), 'l2norm': (1.583831, 0.1)}.
[INFO 11-01 05:43:04] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.220379, 'x2': 0.457293, 'x3': 0.863866, 'x4': 0.937949, 'x5': 0.164575, 'x6': 0.2981}.
[INFO 11-01 05:43:04] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (0.127552, 0.1), 'l2norm': (1.300646, 0.1)}.
[INFO 11-01 05:43:21] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.423885, 'x2': 0.257463, 'x3': 0.491894, 'x4': 0.329372, 'x5': 0.236285, 'x6': 0.588772}.
[INFO 11-01 05:43:21] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-2.131382, 0.1), 'l2norm': (1.021311, 0.1)}.
[INFO 11-01 05:43:33] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.542013, 'x2': 0.266699, 'x3': 0.543116, 'x4': 0.365995, 'x5': 0.246394, 'x6': 0.586602}.
[INFO 11-01 05:43:33] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-1.851059, 0.1), 'l2norm': (1.253614, 0.1)}.
[INFO 11-01 05:43:41] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.448035, 'x2': 0.200818, 'x3': 0.430994, 'x4': 0.300237, 'x5': 0.262921, 'x6': 0.521641}.
[INFO 11-01 05:43:41] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-2.218112, 0.1), 'l2norm': (0.890701, 0.1)}.
[INFO 11-01 05:43:44] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.446234, 'x2': 0.200792, 'x3': 0.496819, 'x4': 0.271282, 'x5': 0.293723, 'x6': 0.589696}.
[INFO 11-01 05:43:44] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-2.403621, 0.1), 'l2norm': (0.975797, 0.1)}.
[INFO 11-01 05:43:46] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.481601, 'x2': 0.19934, 'x3': 0.409951, 'x4': 0.26688, 'x5': 0.293139, 'x6': 0.642039}.
[INFO 11-01 05:43:46] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-2.732715, 0.1), 'l2norm': (0.887178, 0.1)}.
[INFO 11-01 05:43:48] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.511499, 'x2': 0.188659, 'x3': 0.386829, 'x4': 0.220734, 'x5': 0.267558, 'x6': 0.678196}.
[INFO 11-01 05:43:48] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-2.181253, 0.1), 'l2norm': (0.946872, 0.1)}.
[INFO 11-01 05:43:49] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.457831, 'x2': 0.186511, 'x3': 0.38762, 'x4': 0.298546, 'x5': 0.325593, 'x6': 0.663531}.
[INFO 11-01 05:43:49] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-2.490693, 0.1), 'l2norm': (0.825315, 0.1)}.
[INFO 11-01 05:43:50] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.467175, 'x2': 0.274402, 'x3': 0.393564, 'x4': 0.27387, 'x5': 0.311172, 'x6': 0.637922}.
[INFO 11-01 05:43:50] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-2.446052, 0.1), 'l2norm': (0.86947, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters. The other parameters are fixed in the middle of their respective ranges, which in this example is 0.5 for all of them.
# This could alternatively be done with `ax.plot.contour.plot_contour`.
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
[INFO 11-01 05:43:50] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
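To make concrete what the contour plot evaluates, the sketch below computes such a surface on an x1/x2 grid with the remaining parameters pinned to the middle of their ranges. A hypothetical smooth quadratic stands in for the fitted model's predictions; the real plot queries the GP, not a known function:

```python
import numpy as np

# Hypothetical smooth objective standing in for the model's predicted mean;
# its minimum sits at a made-up point `center` for illustration only.
def predicted_mean(x: np.ndarray) -> float:
    center = np.array([0.45, 0.2, 0.45, 0.3, 0.3, 0.6])
    return float(np.sum((x - center) ** 2))


fixed = np.full(6, 0.5)            # remaining parameters at mid-range
grid = np.linspace(0.0, 1.0, 50)   # 50 points per plotted axis
surface = np.array([
    [predicted_mean(np.concatenate(([x1, x2], fixed[2:]))) for x2 in grid]
    for x1 in grid
])  # surface[i, j] = prediction at x1=grid[i], x2=grid[j]
```

The minimum of `surface` lands near (0.45, 0.2), i.e., the projection of the stand-in function's optimum onto the two plotted axes.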
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
This plot illustrates the tradeoffs achievable between two different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful for getting a sense of the Pareto frontier, i.e., the best objective value achievable for different bounds on the constraint.
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
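The Pareto-frontier idea can be sketched independently of Ax: given (objective, constraint) pairs, keep only the points not dominated by any other point. The trial values below are hypothetical, and lower is taken to be better on both axes:

```python
import numpy as np

# Hypothetical (hartmann6, l2norm) outcomes; lower is better on both axes.
points = np.array([
    [-2.7, 1.10],
    [-2.4, 0.95],
    [-1.9, 0.85],
    [-2.2, 1.30],
    [-1.0, 0.90],
])


def pareto_frontier(pts: np.ndarray) -> np.ndarray:
    """Keep points not dominated by another point: a point is dominated
    if some other point is at least as good on both coordinates and
    strictly better on one (minimization on both)."""
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts)
            if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]


front = pareto_frontier(points)
```

Here the first three points form the frontier: each trades a better objective for a looser constraint value, while the last two are dominated.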
CV plots are useful for checking how well the model predictions are calibrated against the actual measurements. If all points are close to the dashed line, the model is a good predictor of the real data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
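The calibration the CV plot visualizes can also be quantified. The sketch below computes the mean absolute error and the fraction of held-out observations falling inside a 95% predictive interval; the predicted/observed values here are hypothetical stand-ins for what `cv_results` contains:

```python
import numpy as np

# Hypothetical leave-one-out predictions (mean and SEM) alongside the
# observations that were held out when each prediction was made.
predicted = np.array([-2.1, -0.3, -1.8, 0.1, -2.5])
pred_sem = np.array([0.20, 0.25, 0.15, 0.30, 0.20])
observed = np.array([-2.2, -0.1, -1.7, 0.0, -2.4])

# Average distance from the dashed y = x line in the CV plot.
mae = np.mean(np.abs(predicted - observed))

# Fraction of observations inside the 95% interval (±1.96 SEM): for a
# well-calibrated model this should be close to 0.95.
inside = np.abs(observed - predicted) <= 1.96 * pred_sem
coverage = inside.mean()
```

A small MAE with coverage near the nominal level suggests the model both predicts well and reports honest uncertainty; coverage far below it indicates overconfidence.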
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a purpose similar to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
Tile plots are useful for viewing the effect of each arm.
render(interact_fitted(model, rel=False))
Total runtime of script: 1 minutes, 11.03 seconds.